14 - Lesson Learnt: Modularization of Deep Networks Allows Cross-Modality Reuse [ID:12856]

So, as shown in the title, I am going to present a specific case in which the modularization of deep networks allows cross-modality reuse.

I joined the lab quite a while ago, but I officially became a PhD candidate last April. Since then I have had a few publications, and today I am going to focus on our paper at the last BVM. At the end of the story, I am going to talk about interpretable networks.

So I have been working with the FrenzyNet for a while. For those who are not familiar

with my work, so FrenzyNet is basically a neural network counterpart of the Frenzy

Vassanus filter. In the FrenzyNet we have the Frenzy filter step by step translated

into neural network language like convolutional layers and max pooling and mathematical operation

layers. Why do we want to do this? It's because by translating it into a neural network without

training it's already performing as the classical methods already and with fine-tuning the network

is basically guaranteed to have a better performance. But the problem of such a method is that it

does not guarantee state-of-the-art performance as from my experience like the FrenzyNet does

not reach the performance of the unit on retinal vessel segmentation task. So what we did in
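For intuition, here is a minimal single-scale sketch of the classical 2-D Frangi vesselness measure that Frangi-Net unrolls into layers; the finite-difference Hessian and the default beta and c values are illustrative assumptions, not the exact configuration from the talk (the real filter uses Gaussian derivatives at multiple scales).

```python
import numpy as np

def frangi_vesselness(img, beta=0.5, c=15.0, dark_ridges=False):
    """Single-scale 2-D Frangi vesselness with a finite-difference Hessian.

    beta, c and the derivative scheme are illustrative defaults only.
    """
    img = img.astype(float)
    # Hessian entries from repeated finite differences.
    gy, gx = np.gradient(img)
    hyy, hyx = np.gradient(gy)
    hxy, hxx = np.gradient(gx)

    # Closed-form eigenvalues of the symmetric 2x2 Hessian per pixel.
    root = np.sqrt((hxx - hyy) ** 2 + 4.0 * hxy ** 2)
    e1 = 0.5 * (hxx + hyy - root)
    e2 = 0.5 * (hxx + hyy + root)
    # Sort so that |lam1| <= |lam2|.
    swap = np.abs(e1) > np.abs(e2)
    lam1 = np.where(swap, e2, e1)
    lam2 = np.where(swap, e1, e2)

    # Blobness ratio and second-order structureness.
    rb = lam1 / np.where(lam2 == 0.0, 1e-12, lam2)
    s = np.sqrt(lam1 ** 2 + lam2 ** 2)
    v = np.exp(-rb ** 2 / (2.0 * beta ** 2)) * (1.0 - np.exp(-s ** 2 / (2.0 * c ** 2)))

    # Bright ridges need lam2 < 0; dark ridges the opposite sign.
    sign_ok = lam2 > 0.0 if dark_ridges else lam2 < 0.0
    return np.where(sign_ok, v, 0.0)
```

Each of these steps (the derivative convolutions, the eigenvalue computation, the exponential response) is what the translation expresses as network layers, which is why the untrained network already matches the classical filter.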

So, what we did in the next step was to build an interpretable network pipeline; this is basically my work at the last MICCAI. What we have here is a pre-processing network, in this case a U-Net, although we could also use a guided filter layer. We add a regularizer to the U-Net to guarantee that the output of the pre-processing step resembles its input. The output of the U-Net pre-processing module is then fed into the Frangi-Net for vessel segmentation. With such a network pipeline we get an interpretable network, and the performance is also boosted: it basically reaches the state of the art. So we are happy about this. But, as shown in the title, today we want to focus on the modularization of networks, and specifically on the U-Net pre-processing module.

Let us have a look at the results of the pre-processing U-Net on fundus images. Here we have the original image. Without the regularizer we get what is basically already a vessel segmentation, without the background and its inhomogeneous illumination. But if we add an L2 regularizer to both ends of the U-Net, what we get is a picture that resembles the original image: the low-frequency information is kept, the background is very smooth, and the edges are preserved. It seems that the U-Net is trained to be an edge-preserving denoising filter. We are happy with this.
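The training objective just described can be sketched as a segmentation loss plus an L2 term that ties the pre-processing output back to its input. The binary cross-entropy form, the weighting factor `lam`, and the function names here are assumptions for illustration; the talk only specifies the L2 similarity regularizer.

```python
import numpy as np

def pipeline_loss(x, y_true, preprocess, segment, lam=0.1):
    """Segmentation loss plus an L2 term tying preprocess(x) back to x.

    The cross-entropy form and the weight lam are illustrative assumptions;
    the essential part is the similarity regularizer on the U-Net output.
    """
    p = preprocess(x)                                # pre-processing network output
    y_pred = np.clip(segment(p), 1e-7, 1.0 - 1e-7)   # vessel probabilities
    seg = -np.mean(y_true * np.log(y_pred) + (1.0 - y_true) * np.log(1.0 - y_pred))
    reg = np.mean((p - x) ** 2)                      # output must resemble the input
    return seg + lam * reg
```

With `lam` large, the pre-processing module is pushed toward an identity-like, edge-preserving denoiser rather than a free-form feature extractor, which is what makes the intermediate image interpretable.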

The next question is: is this kind of pre-processing module transferable? We tried to apply the U-Net, without further fine-tuning, directly to another database, that is, to another data modality. We tried it on OCTA data, and surprisingly the results are very satisfactory. We have the original image here; you can see it is very noisy, and the vessels seem connected but are not very well connected. Before feeding in this image we need to do some data rearrangement, since we need black ridges instead of white ridges and we need to shift the data range a little bit, but everything is linear. Then we feed it into the pre-processing U-Net and get something like this. I think it is pretty clear visually that the pre-processed image has a smoother background and very neat vessels. And if you blend the two, you get something like this, which looks more realistic again, but you can see that the ridges are very nice and the background is very smooth.
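The linear rearrangement just mentioned, turning white ridges into black ridges and shifting the data range, could look like the following sketch; the min-max normalization and the default [lo, hi] target range are assumptions, since the talk only states that the mapping is linear.

```python
import numpy as np

def rearrange_octa(img, lo=0.0, hi=1.0):
    """Linearly remap an OCTA image so bright (white) ridges become dark
    (black) ridges in the [lo, hi] range the fundus-trained U-Net expects.
    The normalization and the default target range are assumptions.
    """
    img = img.astype(float)
    span = img.max() - img.min()
    norm = (img - img.min()) / (span if span > 0 else 1.0)  # to [0, 1]
    return lo + (hi - lo) * (1.0 - norm)  # invert, then shift/scale
```

Because every step is linear, no information is created or destroyed; the network simply sees the vessels with the same polarity and intensity range it was trained on.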

The next question is: how good is the pre-processing procedure? We do not want the pre-processing U-Net to imagine too much. So in the last BVM paper we did a user study. We invited five OCTA experts and asked them to grade the pre-processed image, the raw input, and the blended image with respect to image quality (such as the noise level), vessel connectivity, and diagnostic quality. We asked them to grade from 1 to 5, where 1 is very good and 5 is very bad. From the summarized user study we can see that people basically agree that we get better image quality after the pre-processing U-Net. But of course we want to evaluate our pre-processing method more quantitatively; we want numbers that say how good it is. For this MICCAI, we got some new data from Lennart. So this is OCTA data. So the

Part of a video series:

Presenters

M. Sc. Weilin Fu

Accessible via

Open access

Duration

00:09:15 min

Recording date

2020-02-18

Uploaded on

2020-02-18 18:04:25

Language

en-US
